Optimization algorithm of ship dispatching in container terminals with two-way channel
ZHENG Hongxing, ZHU Xutao, LI Zhenfei
Journal of Computer Applications    2021, 41 (10): 3049-3055.   DOI: 10.11772/j.issn.1001-9081.2020121973
To address encountering and overtaking of ships entering and leaving container terminals with a two-way channel, a ship dispatching optimization algorithm focusing on the service rules was proposed. Firstly, the realistic constraints of the two-way channel and the safety regulations on night sailing in the port were considered jointly. Then, a mixed integer programming model minimizing the total waiting time of ships at the terminal was constructed to obtain the optimal entry-exit sequence of ships. Finally, a branch-and-cut algorithm with an embedded aggregation strategy was designed to solve the model. Numerical experiments show that the average relative deviation between the results of the branch-and-cut algorithm with the embedded aggregation strategy and the lower bound is 2.59%. Moreover, compared with the objective function values obtained by the simulated annealing algorithm and the quantum differential evolution algorithm, those obtained by the proposed branch-and-cut algorithm are reduced by 23.56% and 17.17% respectively, which verifies the effectiveness of the proposed algorithm. In the sensitivity analysis of the schedule obtained by the proposed algorithm, the influences of different safe arrival intervals and ship type proportions were compared, providing decision support for ship dispatching optimization in container terminals with a two-way channel.
Subgraph isomorphism matching algorithm based on neighbor information aggregation
XU Zhoubo, LI Zhen, LIU Huadong, LI Ping
Journal of Computer Applications    2021, 41 (1): 43-47.   DOI: 10.11772/j.issn.1001-9081.2020060935
Graph matching is widely used in practice, and subgraph isomorphism matching is a research hotspot of it with important scientific significance and practical value. Most existing subgraph isomorphism algorithms build constraints on neighbor relationships only, ignoring the local neighborhood information of nodes. To solve this problem, a subgraph isomorphism matching algorithm based on neighbor information aggregation was proposed. Firstly, the aggregated local neighborhood information of the nodes was obtained by feeding the graph attributes and structure into an improved graph convolutional neural network to learn feature-vector representations. Then, the efficiency of the algorithm was improved by optimizing the matching order according to characteristics such as the labels and degrees of the graph. Finally, the Constraint Satisfaction Problem (CSP) model of subgraph isomorphism was established by combining the obtained feature vectors and the optimized matching order with the search algorithm, and the model was solved with a CSP backtracking algorithm. Experimental results show that the proposed algorithm significantly improves the solving efficiency of subgraph isomorphism compared with traditional tree search and constraint solving algorithms.
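The backtracking search sketched below illustrates the CSP core of subgraph isomorphism described in this abstract: label and degree checks prune candidates, and a degree-based matching order expands the most constrained pattern nodes first. This is a minimal illustrative sketch only; it omits the paper's graph-convolutional feature vectors, and all names are assumptions, not the authors' code.

```python
def subgraph_isomorphisms(pattern, target, p_labels, t_labels):
    """Find all injective mappings of pattern nodes into target nodes such
    that every pattern edge maps onto a target edge and labels agree.
    Graphs are dicts: node -> set of neighbouring nodes."""
    # Matching order: try high-degree pattern nodes first, which is the
    # simplest form of the matching-order optimization mentioned above.
    order = sorted(pattern, key=lambda v: -len(pattern[v]))
    results = []

    def feasible(v, w, mapping):
        # Label and degree pruning, then consistency with mapped neighbours.
        if t_labels[w] != p_labels[v] or len(target[w]) < len(pattern[v]):
            return False
        return all(mapping[u] in target[w] for u in pattern[v] if u in mapping)

    def backtrack(i, mapping, used):
        if i == len(order):
            results.append(dict(mapping))
            return
        v = order[i]
        for w in target:
            if w not in used and feasible(v, w, mapping):
                mapping[v] = w
                used.add(w)
                backtrack(i + 1, mapping, used)
                del mapping[v]
                used.discard(w)

    backtrack(0, {}, set())
    return results
```

For example, embedding a labelled triangle into a target graph containing one triangle yields the six automorphic mappings of that triangle.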
Review of speech segmentation and endpoint detection
YANG Jian, LI Zhenpeng, SU Peng
Journal of Computer Applications    2020, 40 (1): 1-7.   DOI: 10.11772/j.issn.1001-9081.2019061071
Speech segmentation is indispensable basic work in speech recognition and speech synthesis, and its quality has a great impact on downstream systems. Although manual segmentation and labeling is highly accurate, it is time-consuming and laborious and requires domain experts. As a result, automatic speech segmentation has become a research hotspot in speech processing. Firstly, surveying the current progress of automatic speech segmentation, several classification schemes for speech segmentation methods were explained. Alignment-based methods and boundary-detection-based methods were introduced respectively, and neural network speech segmentation methods, which can be applied within both frameworks, were expounded in detail. Then, some new speech segmentation technologies based on bio-inspired signals and game theory were introduced, the performance evaluation metrics widely used in the speech segmentation field were given, and these metrics were compared and analyzed. Finally, the above contents were summarized and important future research directions of speech segmentation were put forward.
Secure ranked search scheme based on Simhash over encrypted data
LI Zhen, YAO Hanbing, MU Yicheng
Journal of Computer Applications    2019, 39 (9): 2623-2628.   DOI: 10.11772/j.issn.1001-9081.2019020269

Concerning the heavy computation and low search efficiency of ciphertext retrieval, a secure ranked search scheme based on Simhash was proposed. In this scheme, a Secure Multi-keyword Ranked search Index (SMRI) was constructed based on the dimensionality-reduction idea of Simhash: documents were processed into fingerprints and vectors, a B+ tree was built from the segmented fingerprints and encrypted vectors, and a "filter-refine" strategy was adopted for searching and ranking. Firstly, a candidate result set was obtained by matching segmented fingerprints to perform fast retrieval; then the top-k results were ranked by computing the Hamming distance and the vector inner product between the candidate result set and the query trapdoor. The keyed Simhash algorithm and the Secure k-Nearest Neighbors (SkNN) algorithm ensure the security of the retrieval process. Simulation results show that, compared with the method based on the Vector Space Model (VSM), the SMRI-based ranked search scheme has lower computational complexity, saves time and space cost, and has higher search efficiency; it is suitable for fast and secure retrieval of massive ciphertext data.
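The Simhash fingerprinting, Hamming-distance refinement, and fingerprint segmentation used by the "filter-refine" strategy in this abstract can be sketched as follows. This is a plain (unkeyed) illustration of the standard Simhash technique, not the paper's secure variant; the segment count and hash function are assumptions.

```python
import hashlib

def simhash(tokens, bits=64):
    """64-bit Simhash fingerprint: each token's hash votes +1/-1 per bit
    position; the fingerprint keeps the bits with positive vote totals."""
    votes = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a, b):
    """Hamming distance between two fingerprints (the 'refine' step)."""
    return bin(a ^ b).count("1")

def segments(fp, k=4, bits=64):
    """Split a fingerprint into k segments. By the pigeonhole principle,
    two fingerprints within k-1 bits of each other share at least one
    segment, which enables the cheap segment-matching 'filter' step."""
    w = bits // k
    return [(fp >> (i * w)) & ((1 << w) - 1) for i in range(k)]
```

Candidates that share a segment with the query fingerprint are then re-ranked exactly by `hamming`.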

On-line fabric defect recognition algorithm based on deep learning
WANG Lishun, ZHONG Yong, LI Zhendong, HE Yilong
Journal of Computer Applications    2019, 39 (7): 2125-2128.   DOI: 10.11772/j.issn.1001-9081.2019010110

On-line detection of fabric defects is a major problem faced by the textile industry. Aiming at the high false positive rate, high false negative rate and poor real-time performance of existing fabric defect detection, an on-line detection algorithm for fabric defects based on deep learning was proposed. Firstly, based on the GoogLeNet network architecture and drawing on classical algorithms of other classification models, a fabric defect classification model suitable for the actual production environment was constructed. Secondly, a fabric defect database was built from pictures of different kinds of fabrics labeled by quality inspectors, and this database was used to train the classification model. Finally, the images collected by the high-definition camera on the fabric inspection machine were segmented, and the segmented small images were sent in batches to the trained classification model to classify each small image, so that defects were detected and located. The model was validated on a fabric defect database. Experimental results show that the average test time per small picture is 0.37 ms with the proposed model, 67% lower than with GoogLeNet and 93% lower than with ResNet-50, while the accuracy of the proposed model on the test set is 99.99%, showing that its accuracy and real-time performance meet actual industrial demands.

RMB exchange rate forecast embedded with Internet public opinion intensity
WANG Jixiang, GUO Yi, QI Tianmei, WANG Zhihong, LI Zhen, TANG Minwei
Journal of Computer Applications    2019, 39 (11): 3403-3408.   DOI: 10.11772/j.issn.1001-9081.2019040726
Aiming at the poor prediction performance caused by a single data source in current RMB exchange rate forecasting research, a forecasting technique embedding Internet public opinion intensity was proposed. By comparing and analyzing multiple data sources, the forecast error of the RMB exchange rate was effectively reduced. Firstly, Internet foreign exchange news data and historical market data were fused, and the multi-source text data were converted into computable vectors. Secondly, five feature combinations based on sentiment feature vectors were constructed and compared, and the combination embedding Internet public opinion intensity was chosen as the input of the forecast models. Finally, a temporal sliding window over the foreign exchange public opinion data was designed, and an exchange rate forecast model based on machine learning was built. Experimental results show that the feature combination embedding Internet public opinion outperforms the one without public opinion by 9.8% and 16.2% in Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) respectively. Meanwhile, the forecast model based on the Long Short-Term Memory network (LSTM) is better than those based on Support Vector Regression (SVR), Decision Tree regression (DT) and Deep Neural Network (DNN).
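The temporal sliding window mentioned in this abstract turns a daily series into supervised learning pairs. A minimal sketch, assuming a window of five days and a single fused feature per day (the paper's actual window length and feature layout are not given here):

```python
def make_windows(series, window=5):
    """Turn a daily series (e.g. fused rate + sentiment values) into
    supervised (X, y) pairs: the previous `window` days form the input
    and the next day's value is the prediction target."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y
```

The resulting pairs can then be fed to any of the compared regressors (LSTM, SVR, DT, DNN).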
Optimization of source code search based on multi-feature weight assignment
LI Zhen, NIU Jun, WANG Kui, XIN Yuanyuan
Journal of Computer Applications    2018, 38 (3): 812-817.   DOI: 10.11772/j.issn.1001-9081.2017082043
Accurately searching open source code is a precondition of code reuse. Current keyword-based search methods only match function signatures, while source code comments also describe the semantics of a method's function. Therefore, a keyword search method that takes code comments into account was proposed. Code features such as function signatures and different types of comments were identified from the abstract syntax tree generated from the source code; the code features and query statements were transformed into vectors respectively, and then, based on the cosine similarity between the vectors, a scoring mechanism assigning weights to multiple features of the results was created. According to the scores, an ordered list of relevant functions was obtained that reflects the associations between the code features of the functions and a query. Experimental results demonstrate that the accuracy of search results can be improved by using multiple code features with different weights.
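The multi-feature weighted scoring described in this abstract reduces to a weighted sum of per-feature cosine similarities. A minimal sketch; the specific features (signature, comments) and weight values are illustrative assumptions, not the paper's tuned settings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length real vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def score(query_vec, feature_vecs, weights):
    """Weighted sum of cosine similarities between the query vector and
    each code-feature vector (e.g. signature vector, comment vector)."""
    return sum(w * cosine(query_vec, fv) for fv, w in zip(feature_vecs, weights))
```

Functions are then ranked by `score` in descending order to produce the result list.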
BGN-type outsourced decryption of attribute-based encryption ciphertexts
LI Zhenlin, ZHANG Wei, BAI Ping, WANG Xu'an
Journal of Computer Applications    2017, 37 (8): 2287-2291.   DOI: 10.11772/j.issn.1001-9081.2017.08.2287
Cloud computing security is the key bottleneck restricting its development, and access control on the results of cloud computing is a hot spot of current research. Based on the classical homomorphic encryption BGN (Boneh-Goh-Nissim) scheme, combined with outsourced decryption of Ciphertext-Policy Attribute-Based Encryption (CP-ABE) ciphertexts, a BGN-type scheme for outsourcing the decryption of ABE ciphertexts was constructed. In the scheme, partial decryption of ciphertexts is outsourced to the cloud, and only users whose attributes satisfy the access policy can obtain the correct decryption result, reducing the storage and computation overhead of users. Compared with existing outsourcing schemes for ABE, the proposed scheme can evaluate arbitrarily many additions and one multiplication on ciphertexts. Finally, the security of the scheme was analyzed: the proposed scheme is semantically secure under the subgroup decision assumption, and its attribute security is proved in the random oracle model.
Storage load balancing algorithm based on storage entropy
ZHOU Weibo, ZHONG Yong, LI Zhendong
Journal of Computer Applications    2017, 37 (8): 2209-2213.   DOI: 10.11772/j.issn.1001-9081.2017.08.2209
In distributed storage systems, Disk space Utilization (DU) is generally used to measure the load balance of the storage nodes: when every node has equal disk space utilization, the storage load of the whole system is considered balanced. In practice, however, a storage node with relatively low disk I/O speed and reliability becomes a bottleneck for the data I/O performance of the whole system. Therefore, in heterogeneous distributed storage systems, especially those whose storage nodes differ greatly in disk I/O speed and reliability, data I/O speed is inevitably limited when disk space utilization is the only criterion of storage load balance. A new idea based on read-write efficiency was proposed to measure the storage load balance of a distributed storage system. Following the definition of Storage Entropy (SE) derived from load balance and entropy theory, a load balance algorithm based on SE was proposed: with system-level and single-node load determination as well as load shifting, quantitative adjustment of the storage load of the distributed storage system was achieved. The proposed algorithm was tested and compared with the load balance algorithm based on disk space utilization. Experimental results show that the proposed algorithm balances storage load well, effectively restrains system load imbalance and improves the overall read-write efficiency of the distributed storage system.
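An entropy over the node-load distribution, as the Storage Entropy idea above suggests, is maximal when load is spread evenly and drops as load concentrates. The sketch below is a generic Shannon-entropy formulation assumed for illustration; the paper's SE definition may weight loads differently (e.g. by read-write efficiency):

```python
import math

def storage_entropy(loads):
    """Shannon entropy of the normalized load distribution: maximal when
    every node carries an equal share, lower as load concentrates on a
    few nodes, which signals that load shifting is needed."""
    total = sum(loads)
    ps = [l / total for l in loads if l > 0]
    return -sum(p * math.log(p) for p in ps)
```

A balancing algorithm can then shift load from hot nodes whenever the entropy falls below a threshold.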
Dictionary learning algorithm based on Fisher discriminative criterion constraint of atoms
LI Zhengming, YANG Nanyue, CEN Jian
Journal of Computer Applications    2017, 37 (6): 1716-1721.   DOI: 10.11772/j.issn.1001-9081.2017.06.1716
In order to improve the discriminative ability of a dictionary, a dictionary learning algorithm based on a Fisher discriminative criterion constraint on the atoms was proposed, called Fisher Discriminative Dictionary Learning of Atoms (AFDDL). Firstly, the class-specific dictionary learning algorithm was used to assign a class label to each atom, and the within-class and between-class scatter matrices of the atoms were calculated. Then, the difference between the within-class scatter matrix and the between-class scatter matrix was taken as the Fisher discriminative criterion constraint, which maximizes the differences between atoms of different classes; minimizing the differences between atoms of the same class while reducing their autocorrelation makes same-class atoms reconstruct one type of samples as far as possible and improves the discriminative ability of the dictionary. Experiments were carried out on the AR, FERET and LFW face databases and the USPS handwriting database. The results show that, on the four image databases, the proposed algorithm achieves higher recognition rates and less training time than the Label Consistent K-SVD (LC-KSVD) algorithm, the Locality Constrained and Label Embedding Dictionary Learning (LCLE-DL) algorithm, the Support Vector Guided Dictionary Learning (SVGDL) algorithm and the Fisher Discriminative Dictionary Learning (FDDL) algorithm, and higher recognition rates than Sparse Representation based Classification (SRC) and Collaborative Representation based Classification (CRC).
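The Fisher-style constraint in this abstract can be illustrated with scatter traces: trace(Sw) measures spread of atoms around their class means, trace(Sb) measures spread of class means around the global mean, and minimizing their difference pulls same-class atoms together while pushing classes apart. This sketch assumes the trace form of the criterion for simplicity; the paper works with the full scatter matrices inside a dictionary-learning objective:

```python
def sq_dist(u, v):
    """Squared Euclidean distance between two vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def mean(vectors):
    """Componentwise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def fisher_criterion(classes):
    """trace(Sw) - trace(Sb) over class-labelled atoms: smaller values mean
    tighter classes and better-separated class means."""
    all_atoms = [a for c in classes for a in c]
    mu = mean(all_atoms)
    sw = sum(sq_dist(a, mean(c)) for c in classes for a in c)
    sb = sum(len(c) * sq_dist(mean(c), mu) for c in classes)
    return sw - sb
```

Moving the two classes further apart (larger Sb) makes the criterion more negative, which is the direction the constraint rewards.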
Prediction of eight-class protein secondary structure based on deep learning
ZHANG Lei, LI Zheng, ZHENG Fengbin, YANG Wei
Journal of Computer Applications    2017, 37 (5): 1512-1515.   DOI: 10.11772/j.issn.1001-9081.2017.05.1512
Predicting protein secondary structure is an important issue in structural biology. Aiming at eight-class protein secondary structure prediction, a novel deep learning prediction algorithm was proposed by combining a recurrent neural network with a feed-forward neural network. A bidirectional recurrent neural network was used to model local and long-range interactions between amino acid residues in a protein. To predict the eight-class secondary structure, the outputs of the hidden layer of the bidirectional recurrent neural network were further fed into a three-layer feed-forward neural network. Experimental results show that the proposed method achieves a Q8 accuracy of 67.9% on the CB513 dataset, which is significantly better than SSpro8 and SC-GSN (Supervised Convolutional-Generative Stochastic Network).
Predicate encryption scheme supporting secure multi-party homomorphic multiplicative computation
LI Zhenlin, ZHANG Wei, DAI Xiaoming
Journal of Computer Applications    2017, 37 (4): 999-1003.   DOI: 10.11772/j.issn.1001-9081.2017.04.0999
In traditional Secure Multi-party Computation (SMC), every participant can obtain the final result, but such coarse-grained access control cannot restrict decryption of the result to specific users. Therefore, a new encryption scheme with more precise access control over the decryption authority for computation results was put forward. Combined with predicate encryption, a predicate encryption scheme with the multiplicative homomorphic property for secure multi-party computation was constructed. Compared with existing predicate encryption, it supports homomorphic operations and controls the decryption authority for computation results more precisely. In the current cloud environment, it realizes secure multi-party computation with finer-grained access control over computation results, and it is proved secure in the sense of INDistinguishability under Attribute-Hiding Chosen Plaintext Attacks (IND-AH-CPA).
Virtual network embedding algorithm based on multi-objective particle swarm optimization
LI Zhen, ZHENG Xiangwei, ZHANG Hui
Journal of Computer Applications    2017, 37 (3): 755-759.   DOI: 10.11772/j.issn.1001-9081.2017.03.755
In virtual network mapping, most studies consider only a single mapping objective, which cannot reflect the interests of multiple parties. To solve this problem, a Virtual Network Embedding algorithm based on Multi-Objective Particle Swarm Optimization (VNE-MOPSO) was proposed by combining a multi-objective algorithm with the Particle Swarm Optimization (PSO) algorithm. Firstly, a crossover operator was introduced into the basic PSO algorithm to expand the search space of population optimization. Secondly, non-dominated sorting and crowding distance sorting were introduced into the multi-objective optimization algorithm to speed up population convergence. Finally, taking the minimization of both cost and node load balance degree as the virtual network mapping objective function, a multi-objective PSO algorithm was proposed to solve the Virtual Network Mapping Problem (VNMP). Experimental results show that the proposed algorithm can solve the VNMP with advantages in network request acceptance rate, average cost, average node load balance degree and infrastructure provider's profit.
Many-objective optimization algorithm based on linear weighted minimal/maximal dominance
ZHU Zhanlei, LI Zheng, ZHAO Ruilian
Journal of Computer Applications    2017, 37 (10): 2823-2827.   DOI: 10.11772/j.issn.1001-9081.2017.10.2823
In Many-objective Optimization Problems (MaOPs), as the number of optimization objectives increases, the number of Pareto non-dominated solutions grows exponentially and the selection pressure of Pareto dominance decreases. To solve these issues, a new type of dominance, Linear Weighted Minimal/Maximal dominance (LWM-dominance), was proposed by combining the ideas of comparing multi-objective solutions via linear weighted aggregation and via Pareto dominance. It is theoretically proved that the LWM non-dominated solution set is a subset of the Pareto non-dominated solution set in which the important corner solutions are preserved. Furthermore, an MaOP algorithm based on LWM dominance was presented. Empirical studies confirm the corollaries of the proposed LWM dominance: experimental results in random objective spaces show that LWM dominance is suitable for MaOPs with 5-15 objectives, and an experiment comparing the numbers of LWM non-dominated and Pareto non-dominated solutions on DTLZ1-DTLZ7 shows that the proportion of non-dominated solutions decreases by about 17% on average when the number of optimization objectives is 10 or 15.
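One plausible reading of LWM-dominance, sketched below purely for illustration: solution a dominates b (for minimization) when both the minimal and the maximal linearly weighted objective of a are no worse than b's, with at least one strictly better. The exact definition, weights, and the subset proof are in the paper; everything here is an assumed simplification.

```python
def lwm_dominates(a, b, weights=None):
    """Assumed LWM-dominance sketch (minimization): compare the min and max
    of the weighted objective vectors instead of every component, which
    keeps far more solution pairs comparable than Pareto dominance."""
    if weights is None:
        weights = [1.0] * len(a)
    wa = [w * x for w, x in zip(weights, a)]
    wb = [w * x for w, x in zip(weights, b)]
    no_worse = min(wa) <= min(wb) and max(wa) <= max(wb)
    strictly = min(wa) < min(wb) or max(wa) < max(wb)
    return no_worse and strictly
```

Note how [1, 3] and [3, 1] remain mutually non-dominated: corner solutions survive, consistent with the property claimed in the abstract.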
Software protection method based on monitoring attack threats
TANG Zhanyong, LI Zhen, ZHANG Cong, GONG Xiaoqing, FANG Dingyi
Journal of Computer Applications    2017, 37 (1): 120-127.   DOI: 10.11772/j.issn.1001-9081.2017.01.0120
To increase the difficulty of software reverse analysis and improve software security, a software protection method based on monitoring attack threats was proposed. By deploying a threat-monitoring net, a variety of threats in the software execution process can be detected and resolved in real time, so that the software runs in a relatively safe environment and is difficult to analyze reversely. The scheme covers three main aspects. 1) Attack threat description: potential attack threats were analyzed and described as triples. 2) Deployment of the threat-monitoring net: a node base was constructed after analyzing the features of each threat and designing the corresponding detection methods; a reasonable deployment scheme based on the characteristics of the nodes was selected, and the nodes were deployed effectively at different places in the software. 3) Prototype system implementation and experimental design: a prototype system implementing the protection scheme was built, a group of test cases was protected with it to collect experimental data, and performance consumption and security were evaluated. The final results show that the proposed method is feasible and effective for protecting software.
Pheromone updating strategy of ant colony algorithm for multi-objective test case prioritization
XING Xing, SHANG Ying, ZHAO Ruilian, LI Zheng
Journal of Computer Applications    2016, 36 (9): 2497-2502.   DOI: 10.11772/j.issn.1001-9081.2016.09.2497
Ant Colony Optimization (ACO) converges slowly and is easily trapped in local optima when solving Multi-Objective Test Case Prioritization (MOTCP). Thus, a pheromone updating strategy based on Epistatic-domain Test case Segments (ETS) was proposed. In this scheme, the ETS in a test case sequence, which determines the fitness value, was selected as the scope of pheromone updating. Then, the pheromone on the trail was updated according to the fitness value increments between test cases and the execution times of the test cases in the ETS. To further improve the efficiency of ACO and reduce the time consumed when ants visit test cases one by one, the end of the ants' visit was reset by estimating the length of the ETS in the optimized ACO. Experimental results show that, compared with the original ACO and NSGA-Ⅱ, the optimized ACO converges faster and obtains better Pareto optimal solution sets for MOTCP.
GSW-type hierarchical identity-based fully homomorphic encryption scheme from learning with errors
DAI Xiaoming, ZHANG Wei, ZHENG Zhiheng, LI Zhenlin
Journal of Computer Applications    2016, 36 (7): 1856-1860.   DOI: 10.11772/j.issn.1001-9081.2016.07.1856
Focusing on the functional defect of traditional Identity-Based Encryption (IBE) schemes that ciphertexts cannot be computed on directly, a new IBE scheme was proposed. The homomorphism transformation mechanism proposed by Gentry was used to transform the hierarchical IBE scheme proposed by Agrawal into a homomorphic hierarchical IBE scheme. Compared with the GSW (Gentry, Sahai, Waters) scheme (GENTRY C, SAHAI A, WATERS B. Homomorphic encryption from learning with errors: conceptually-simpler, asymptotically-faster, attribute-based. CRYPTO 2013: Proceedings of the 33rd Annual Cryptology Conference on Advances in Cryptology. Berlin: Springer, 2013: 75-92) and the CM (Clear, McGoldrick) scheme (CLEAR M, MCGOLDRICK C. Bootstrappable identity-based fully homomorphic encryption. CANS 2014: Proceedings of the 13th International Conference on Cryptology and Network Security. Berlin: Springer, 2014: 1-19), the construction of the proposed scheme is more natural, and its space complexity is reduced from cubic to square, giving higher efficiency. In the current cloud computing environment, the proposed scheme can contribute to moving fully homomorphic encryption based on the Learning With Errors (LWE) problem from theory to practice. Performance analysis and verification under the random oracle model prove that the proposed scheme is secure in the sense of INDistinguishability of IDentity-based encryption under Chosen-Plaintext Attack (IND-ID-CPA).
Cellular automaton model of vehicle-bicycle conflict at channelized islands based on VISSIM microscopic traffic simulation software
LIAN Peikun, LI Zhenlong, RONG Jian, CHEN Ning
Journal of Computer Applications    2016, 36 (6): 1745-1750.   DOI: 10.11772/j.issn.1001-9081.2016.06.1745
Because of the complex vehicle-bicycle conflict behaviors in the conflict zones of channelized islands, the capacity of a right-turn lane calculated by traditional analytical methods differs from the practical condition. To solve this problem, a cellular automaton model of vehicle-bicycle conflict at channelized islands based on the VISSIM microscopic traffic simulation software was proposed. According to the proposed cellular automaton rules, the component object model of VISSIM was used for programming to control the velocity variation of right-turn vehicles through a series of detectors that simulated cells, so that the blocking effect on right-turn vehicles in conflict with non-motorized vehicles or pedestrians could be simulated. Meanwhile, the crossing behaviors of non-motorized vehicles and pedestrians were controlled by the priority rules of VISSIM. The simulation results show that the average relative error between the right-turn lane capacity given by the proposed model and the practical observed value is 5.45%. The experimental results show that the proposed model outperforms traditional analytical methods and can reflect the practical condition of the conflict zones of channelized islands, thus providing a theoretical basis for the planning, design, traffic management and organization of channelized islands under mixed traffic flow.
Design of measurement and control system for car body-in-white detection
LI Zhenghui, GUO Yin, ZHANG Hongbin, ZHANG Bin
Journal of Computer Applications    2016, 36 (5): 1445-1449.   DOI: 10.11772/j.issn.1001-9081.2016.05.1445
In order to achieve unified management and remote communication of the measuring equipment in a car body-in-white online visual inspection station and improve working efficiency, a measurement and control system for car body-in-white detection was designed. With an STM32F407 as the core, μC/OS-Ⅱ and LwIP were ported to build a Web server, through which remote communication was realized. Multithreaded tasks were established to achieve information interaction between the serial port and the network port. The data security issue in the routing process and the phenomenon of packet loss during transmission were analyzed, and a solution was proposed. A 2D normalized cross-correlation method was used to realize 2D image positioning and enhance the processing speed. The experimental results show that the system provides remote communication, reduces cost and improves the efficiency of equipment management.
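The 2D normalized cross-correlation used for image positioning above can be sketched as the zero-mean NCC of two equal-size patches; the embedded implementation presumably slides this score over the search image, but the kernel itself is:

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size 2D arrays.
    Returns 1.0 for a perfect match up to brightness/contrast changes,
    -1.0 for a perfectly inverted match."""
    p = [x for row in patch for x in row]
    t = [x for row in template for x in row]
    mp, mt = sum(p) / len(p), sum(t) / len(t)
    dp = [x - mp for x in p]
    dt = [x - mt for x in t]
    num = sum(a * b for a, b in zip(dp, dt))
    den = math.sqrt(sum(a * a for a in dp)) * math.sqrt(sum(b * b for b in dt))
    return num / den if den else 0.0
```

Positioning then amounts to taking the offset that maximizes this score, which is robust to lighting changes because the means and norms are factored out.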
Mutation strategy based on concurrent program data racing fault
WU Yubo, GUO Junxia, LI Zheng, ZHAO Ruilian
Journal of Computer Applications    2016, 36 (11): 3170-3177.   DOI: 10.11772/j.issn.1001-9081.2016.11.3170
Because existing mutation operators for concurrent programs have low ability to trigger data racing faults in mutation testing, new mutation strategies based on data racing faults were proposed. From the viewpoint of mutation operator design, the Lock-oriented Mutation Strategy (LMS) and the Shared-variable-oriented Mutation Strategy (SMS) were introduced, and two new mutation operators named Synchronized Lock Resting Operator (SLRO) and Move Shared Variable Operator (MSVO) were designed. From the viewpoint of mutation point selection, a new strategy named Synchronized relationship pair Mutation Point Selection Strategy (SMPSS) was also proposed. The SLRO and MSVO operators were used to inject faults at the mutation points generated by the SMPSS strategy into 12 Java concurrency libraries, and the ability of the mutants to trigger data racing faults was then checked using Java PathFinder (JPF). The results show that SLRO and MSVO generate 121 and 122 effective mutants for the 12 Java libraries, with effectiveness rates of 95.28% and 99.19% respectively. In summary, the new concurrency mutation operators and mutation strategies can effectively trigger data racing faults.
Generation method of thread scheduling sequence based on all synchronization pairs coverage criteria
SHI Cunfeng, LI Zheng, GUO Junxia, ZHAO Ruilian
Journal of Computer Applications    2015, 35 (7): 2004-2008.   DOI: 10.11772/j.issn.1001-9081.2015.07.2004
Aiming at the low efficiency of generating Thread Scheduling Sequences (TSS) that cover the synchronization statements in multi-threaded concurrent programs, a TSS Generation method Based on All synchronization pairs coverage criteria (TGBA) was proposed. First, according to the synchronization statements in a concurrent program, synchronization pairs and the All Synchronization Pairs Coverage criterion (APSC) were defined. Second, a construction method for the Synchronization Pair Thread Graph (SPTG) was given, on the basis of which TSSs satisfying APSC were generated. Finally, using the JPF (Java PathFinder) detection tool, TSS generation experiments were conducted on four Java Library concurrent programs, and generation efficiency was compared with that of the general sequence generation methods of Default Scheduling (DS), Preemptive Scheduling (PS) and Cross Scheduling (CS). The experimental results illustrate that, unlike the DS and CS methods, the TGBA method generates TSSs that cover all synchronization pairs. Moreover, while satisfying APSC, the TGBA method explores at least 19889 fewer states and 44352 fewer transitions than the PS method, and the average generation efficiency increases by 1.95 times. Therefore, the TGBA method can reduce the cost of state space exploration and improve the efficiency of TSS generation.
Reference | Related Articles | Metrics
Massive terrain data storage based on HBase
LI Zhenju, LI Xuejun, XIE Jianwei, LI Yannan
Journal of Computer Applications    2015, 35 (7): 1849-1853.   DOI: 10.11772/j.issn.1001-9081.2015.07.1849
Abstract513)      PDF (807KB)(668)       Save
With the development of remote sensing technology, the data types and data volume of remote sensing data have increased dramatically in the past decades, which is a challenge for the traditional storage mode. A combination of quadtree and Hilbert spatial indexes was proposed in this paper to solve the low storage efficiency of HBase data storage. Firstly, the research status of traditional terrain data storage and of HBase-based data storage was reviewed. Secondly, the design idea of combining quadtree and Hilbert spatial indexes to manage global data was proposed. Thirdly, an algorithm for calculating the row and column numbers from the longitude and latitude of terrain data, and an algorithm for calculating the final Hilbert code, were designed. Finally, the physical storage infrastructure for the index was designed. The experimental results illustrate that the data loading speed in a Hadoop cluster improves by 63.79%-78.45% compared to a single computer, the query time decreases by 16.13%-39.68% compared to the traditional row key index, and the query speed is at least 14.71 MB/s, which can meet the requirements of terrain data visualization.
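The row-key computation can be sketched as follows (simplified: the lat/lon-to-grid mapping and parameter names are assumptions, and the paper additionally combines the code with a quadtree tile level). The core is the standard xy2d Hilbert algorithm, which maps a grid cell to its distance along the curve so that cells close in space get close row keys:

```python
def hilbert_d(n, x, y):
    """Distance of cell (x, y) along the Hilbert curve on an n x n grid
    (n a power of two); the standard xy2d algorithm."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                    # rotate/reflect the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def row_key(lat, lon, order):
    """Row key sketch: grid cell from latitude/longitude, then the
    zero-padded Hilbert code so lexicographic order follows the curve."""
    n = 1 << order
    col = min(int((lon + 180.0) / 360.0 * n), n - 1)
    row = min(int((lat + 90.0) / 180.0 * n), n - 1)
    return "%0*d" % (2 * order, hilbert_d(n, col, row))
```

The zero-padding matters in HBase because row keys sort lexicographically, not numerically.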
Reference | Related Articles | Metrics
Improvement of term frequency-inverse document frequency algorithm based on Document Triage
LI Zhenjun, ZHOU Zhurong
Journal of Computer Applications    2015, 35 (12): 3506-3510.   DOI: 10.11772/j.issn.1001-9081.2015.12.3506
Abstract505)      PDF (952KB)(412)       Save
The Term Frequency-Inverse Document Frequency (TF-IDF) algorithm does not consider the importance of the index terms themselves in the document when computing their weights. In order to solve this problem, users' reading behaviors were utilized to improve the effectiveness of TF-IDF. By introducing Document Triage into TF-IDF, the Interest Profile Manager (IPM) was used to collect data about users' reading behaviors, from which document scores were computed. Since users' annotations mark the important parts of the target text and reflect users' interests, an improved term weighting algorithm named Document Triage-Term Frequency-Inverse Document Frequency (DT-TF-IDF) was proposed by introducing document scores and users' annotations into TF-IDF and giving a greater weight to annotated terms. The experimental results show that the recall, the precision and their harmonic mean of DT-TF-IDF are all higher than those of the traditional TF-IDF algorithm, so the proposed DT-TF-IDF algorithm is more effective than TF-IDF and improves the accuracy of text similarity calculation.
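The weighting idea can be sketched as below (an illustrative reading of DT-TF-IDF: the per-document score input and the annotation `boost` factor are assumptions, not the paper's exact formula):

```python
import math
from collections import Counter

def dt_tf_idf(docs, doc_scores, annotations, boost=2.0):
    """Classic TF-IDF weight, scaled by a per-document triage score and
    boosted for terms the user annotated."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    weights = []
    for doc, score, notes in zip(docs, doc_scores, annotations):
        tf = Counter(doc)
        w = {}
        for term, freq in tf.items():
            w[term] = (freq / len(doc)) * math.log(n / df[term]) * score
            if term in notes:                       # annotated term
                w[term] *= boost
        weights.append(w)
    return weights

weights = dt_tf_idf([["a", "b", "a"], ["b", "c"]],
                    [1.0, 1.0], [{"a"}, set()])
```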
Reference | Related Articles | Metrics
MapReduce performance model based on multi-phase dividing
LI Zhenju, LI Xuejun, YANG Sheng, LIU Tao
Journal of Computer Applications    2015, 35 (12): 3374-3377.   DOI: 10.11772/j.issn.1001-9081.2015.12.3374
Abstract557)      PDF (712KB)(327)       Save
In order to resolve the low precision and high complexity of existing MapReduce models caused by unreasonable phase partitioning granularity, a multi-phase MapReduce Model (MR-Model) with five partition granularities was proposed. Firstly, the research status of MapReduce models was reviewed. Secondly, the MapReduce job was divided into the five phases of Read, Map, Shuffle, Reduce and Write, and the specific processing time of each phase was studied. Finally, the prediction performance of MR-Model was tested by experiments. The experimental results show that MR-Model fits the actual MapReduce job execution process. Compared with the two existing models P-Model and H-Model, the time prediction accuracy of MR-Model is improved by 10%-30%; in the Reduce phase, its time prediction accuracy is improved by 2-3 times, so the comprehensive performance of MR-Model is better.
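The five-phase decomposition can be sketched as a simple additive cost model (illustrative only: the per-phase throughput rates and the linear scaling are assumptions, not the paper's fitted model):

```python
def mr_model(data_mb, maps, reduces, rates):
    """Toy additive five-phase estimate of MapReduce job time (s).
    `rates` gives an assumed throughput (MB/s) for each phase."""
    per_map = data_mb / maps          # input split handled by one map task
    per_reduce = data_mb / reduces    # partition handled by one reduce task
    return (per_map / rates["read"]
            + per_map / rates["map"]
            + per_reduce / rates["shuffle"]
            + per_reduce / rates["reduce"]
            + per_reduce / rates["write"])
```

In the real model each phase's rate would be calibrated from profiled jobs rather than given as a constant.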
Reference | Related Articles | Metrics
Cryptographic procedure analysis based on cryptographic library function
ZHANG Yanwen, YIN Qing, LI Zhenglian, SHU Hui, CHANG Rui
Journal of Computer Applications    2014, 34 (7): 1929-1935.   DOI: 10.11772/j.issn.1001-9081.2014.07.1929
Abstract149)      PDF (1118KB)(461)       Save
Since it is hard to analyze a cryptographic procedure by property scanning or debugging, given the variety of cryptographic algorithms and their different implementations, a method was proposed based on library function prototype analysis and calling-graph building. Library function prototype analysis analyzes cryptographic algorithm knowledge and library frame knowledge to form a knowledge base. Calling-graph building constructs a calling-graph that reflects the function calling order according to the parameter values of the functions; finally, the cryptographic algorithm knowledge and library frame knowledge on the calling-graph are extracted. The method discriminates common cryptographic algorithms with almost 100% accuracy, and it can not only recover cryptographic data, keys and cryptographic modes, but also help to analyze the relationship between two or more cryptographic algorithms dealing with the same data. The method can be used to analyze Trojans and worms, and to test whether a library is used correctly.
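The calling-graph construction can be illustrated as follows (hypothetical record format: each logged library call carries sets of input/output parameter values, and an edge is added when an output of one call feeds a later call, recovering the data-flow order of the routine):

```python
def build_calling_graph(calls):
    """Add an edge i -> j when an output parameter value of call i
    appears as an input parameter of a later call j."""
    edges = []
    for i, a in enumerate(calls):
        for b in calls[i + 1:]:
            if set(a["out"]) & set(b["in"]):
                edges.append((a["name"], b["name"]))
    return edges

# Toy trace: hash a buffer, then sign the digest.
trace = [
    {"name": "MD5_Update", "in": ["buf"],    "out": ["ctx1"]},
    {"name": "MD5_Final",  "in": ["ctx1"],   "out": ["digest"]},
    {"name": "RSA_sign",   "in": ["digest"], "out": ["sig"]},
]
edges = build_calling_graph(trace)
```

The shared `digest` value is what links the two algorithms operating on the same data.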
Reference | Related Articles | Metrics
Network traffic forecasting model based on Gaussian process regression
LI Zhengang
Journal of Computer Applications    2014, 34 (5): 1251-1254.   DOI: 10.11772/j.issn.1001-9081.2014.05.1251
Abstract501)      PDF (557KB)(767)       Save
To overcome the defects of traditional network traffic forecasting methods and obtain good forecasting results, a network traffic forecasting model based on Gaussian Process Regression (GPR) was proposed. Firstly, the time delay and embedding dimension of the network traffic were calculated to construct the learning samples of GPR; then the training samples were input into the Gaussian process for learning, in which the Invasive Weed Optimization (IWO) algorithm was used to optimize the parameters of the Gaussian process; finally, the forecasting model of network traffic was established based on the optimal parameters, and its performance was tested on network traffic data. The results show that the proposed model can improve the forecasting precision of network traffic and has great practical application value.
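The first step, building GPR learning samples from the scalar traffic series, can be sketched as a time-delay embedding (a minimal sketch; in the paper `dim` and `tau` are computed from the data rather than chosen by hand):

```python
def delay_embed(series, dim, tau):
    """Phase-space reconstruction: each sample X[t] is `dim` past values
    spaced `tau` apart, and the target y[t] is the next traffic value."""
    X, y = [], []
    span = (dim - 1) * tau
    for t in range(span, len(series) - 1):
        X.append([series[t - span + i * tau] for i in range(dim)])
        y.append(series[t + 1])
    return X, y
```

The resulting (X, y) pairs are what a GPR library would then be trained on.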
Reference | Related Articles | Metrics
Fast handover mechanism based on Delaunay triangulation for FMIPv6
LI Zhenjun, LIU Xing
Journal of Computer Applications    2013, 33 (10): 2707-2710.  
Abstract505)      PDF (704KB)(593)       Save
To solve the packet loss problem caused by the inaccurate prediction of the New Access Router (NAR) in Fast Handovers for Mobile IPv6 (FMIPv6), this paper proposed a triangulation-based fast handover mechanism (TFMIPv6). In TFMIPv6, a Delaunay triangulation algorithm was used to split the network into a virtual triangle topology, and tunnels were established among adjacent access routers. The candidate target Access Points (AP) were selected to quickly recalculate the new care-of addresses for the mobile nodes, and packets were buffered at the two potential NARs during handover. The experimental results illustrate that the TFMIPv6 protocol achieves lower handover latency and packet loss rate than FMIPv6.
Related Articles | Metrics
Joint call admission control algorithm based on reputation model
LI Zhen, ZHU Lei, CHEN Xushan, JIANG Haixia
Journal of Computer Applications    2013, 33 (09): 2455-2459.   DOI: 10.11772/j.issn.1001-9081.2013.09.2455
Abstract599)      PDF (721KB)(344)       Save
In order to overcome the limited research scenarios of call admission control in heterogeneous wireless networks and to reduce the blindness of access network selection, the scenario was extended from an integrated system with two networks to an integrated system with multiple networks, and a joint call admission control algorithm based on a reputation model was proposed. The reputation model was applied to network selection and the feedback mechanism. On the user side, terminals chose the access network according to the networks' reputation; on the network side, the networks made decisions through adaptive bandwidth management and a buffer queuing policy to enhance the probability of successful acceptance. Simulation results show that the proposed algorithm effectively reduces the new call blocking probability and the handoff call dropping probability.
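A minimal sketch of the reputation side (the exponential-forgetting update and the max-reputation selection rule are assumptions for illustration, not the paper's exact model):

```python
def update_reputation(rep, feedback, lam=0.8):
    """Feedback update with exponential forgetting: lam weights the old
    reputation against the newest feedback score in [0, 1]."""
    return lam * rep + (1 - lam) * feedback

def select_network(reputations):
    """Terminal side: admit the call to the candidate network with the
    highest current reputation."""
    return max(reputations, key=reputations.get)
```

Networks that repeatedly block calls would see their reputation, and hence their share of new arrivals, decay.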
Related Articles | Metrics
Robust feature selection method in high-dimensional data mining
LI Zhean, CHEN Jianping, ZHANG Yajuan, ZHAO Weihua
Journal of Computer Applications    2013, 33 (08): 2194-2197.  
Abstract1006)      PDF (811KB)(786)       Save
In high-dimensional data, the number of variables is usually larger than the sample size and the data are often heterogeneous. Accordingly, a robust and effective feature selection method was proposed by combining the dimension reduction technique of variable selection with an estimation method based on modal regression. The estimation algorithm was given by using the Local Quadratic Algorithm (LQA) and the Expectation-Maximization (EM) algorithm, and the selection method for the tuning parameter was also discussed. Simulation data analysis shows that the proposed method is overall better than the regularized methods based on least squares and median regression. Compared with the existing methods, the proposed method has higher prediction ability and stronger robustness, especially for non-normal error distributions.
Reference | Related Articles | Metrics
Data scheduling strategy in P2P streaming system based on improved particle swarm optimization algorithm
LI Zhenxing, LIU Zhuojun
Journal of Computer Applications    2013, 33 (04): 931-934.   DOI: 10.3724/SP.J.1087.2013.00931
Abstract870)      PDF (802KB)(511)       Save
The data scheduling strategy in Peer-to-Peer (P2P) media streaming is a key research topic for P2P media streaming systems. In this paper, a Particle Swarm Optimization (PSO) algorithm was modified according to the features of P2P streaming data scheduling, and a digital encoding string style for the algorithm was proposed. The data scheduling strategy chose data chunks taking into account resource urgency and scarcity degree, and the modified discrete particle swarm algorithm was used to choose the peers, yielding the optimal set of scheduling peers. In order to verify the feasibility and effectiveness of the algorithm, experiments were done to simulate the convergence of the algorithm, the scheduling time, the uplink bandwidth utilization of the P2P network, and the load balancing of the peers.
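The chunk-selection part can be sketched as follows (an illustrative reading, not the paper's formula: the `alpha` balance factor and the reciprocal urgency/scarcity terms are assumptions). Urgency grows as a chunk's playback deadline approaches the playhead, and scarcity grows as fewer neighbor peers hold the chunk:

```python
def chunk_priority(deadline, playhead, holders, alpha=0.5):
    """Priority of a data chunk: urgency rises as its playback deadline
    nears, scarcity rises as fewer neighbor peers hold it; alpha
    balances the two terms."""
    urgency = 1.0 / max(deadline - playhead, 1)
    scarcity = 1.0 / max(holders, 1)
    return alpha * urgency + (1 - alpha) * scarcity

def schedule(chunks, playhead):
    """Request chunks in decreasing priority order."""
    return sorted(chunks, reverse=True,
                  key=lambda c: chunk_priority(c["deadline"], playhead,
                                               c["holders"]))

order = schedule([{"id": 1, "deadline": 10, "holders": 5},
                  {"id": 2, "deadline": 3, "holders": 5}], playhead=0)
```

The discrete PSO then searches over which neighbor serves each prioritized chunk; the priority above only fixes the request order.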
Reference | Related Articles | Metrics